The practice of science involves formulating and testing hypotheses, assertions that are capable of being proven false using a test of observed data. The null hypothesis typically corresponds to a general or default position. For example, the null hypothesis might be that there is no relationship between two measured phenomena[1] or that a potential treatment has no effect.[2]
The term was originally coined by English geneticist and statistician Ronald Fisher in 1935.[3][4] It is typically paired with a second hypothesis, the alternative hypothesis, which asserts a particular relationship between the phenomena. Jerzy Neyman and Egon Pearson formalized the notion of the alternative. The alternative need not be the logical negation of the null hypothesis; rather, it predicts the results of the experiment when the alternative is true. The use of alternative hypotheses was not part of Fisher's formulation, but it became standard.
It is important to understand that the null hypothesis can never be proven. A set of data can only reject a null hypothesis or fail to reject it. For example, if a comparison of two groups (e.g. treatment vs. no treatment) reveals no statistically significant difference between the two, it does not mean that there is no difference in reality. It only means that there is not enough evidence to reject the null hypothesis (in other words, the experiment fails to reject the null hypothesis).[5]
Hypothesis testing works by collecting data and measuring how likely the particular set of data is, assuming the null hypothesis is true. If the data are very unlikely, meaning they belong to a set of outcomes that would be observed only rarely (usually less than 5% or 1% of the time), the experimenter rejects the null hypothesis, concluding that it is (probably) false. If the data do not contradict the null hypothesis, only a weak conclusion can be drawn: the observed dataset provides no strong evidence against the null hypothesis. Because the null hypothesis could then be either true or false, in some contexts this is interpreted as meaning that the data give insufficient evidence for any conclusion, while in others it means that there is no evidence to support changing from a currently useful regime to a different one.
For instance, a certain drug may reduce the chance of having a heart attack. Possible null hypotheses are "this drug does not reduce the chances of having a heart attack" or "this drug has no effect on the chances of having a heart attack". The test of the hypothesis consists of administering the drug to half of the people in a study group as a controlled experiment. If the data show a statistically significant change in the people receiving the drug, the null hypothesis is rejected.
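As a sketch of how such a trial might be analyzed, the snippet below applies a one-sided two-proportion z-test; the event counts, sample sizes, and the choice of test are illustrative assumptions, not part of the example above.

```python
import math

def normal_cdf(z):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def drug_trial_p_value(events_drug, n_drug, events_control, n_control):
    """One-sided two-proportion z-test of the null hypothesis
    'the drug does not reduce the chance of a heart attack'
    (p_drug >= p_control) against the alternative p_drug < p_control."""
    p1 = events_drug / n_drug
    p2 = events_control / n_control
    pooled = (events_drug + events_control) / (n_drug + n_control)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_drug + 1 / n_control))
    z = (p1 - p2) / se
    return normal_cdf(z)  # a small p-value is evidence against the null

# Hypothetical counts: 30 heart attacks among 1000 drug recipients
# versus 50 among 1000 controls.
p_value = drug_trial_p_value(30, 1000, 50, 1000)
print(round(p_value, 4))
```

With these made-up counts the p-value falls below 0.05, so the null hypothesis would be rejected at the conventional 5% level.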
The choice of null hypothesis (H0) and consideration of directionality (see "one-tailed test") is critical. Consider the question of whether a tossed coin is fair (i.e. that on average it lands heads up 50% of the time). A potential null hypothesis is "this coin is not biased towards heads" (one-tailed test). The experiment is to toss the coin repeatedly. A possible result of 5 tosses is 5 heads. Under this null hypothesis, the data are considered unlikely: with a fair coin, the probability of 5 heads in 5 tosses is about 3%. At the conventional 5% significance level, the null hypothesis is rejected and the coin is judged to be biased towards heads.
Alternatively, the null hypothesis "this coin is fair" admits extreme runs of tails as well as heads, doubling the probability of 5 of a kind to about 6% (two-tailed test). This is no longer statistically significant at the 5% level, so the null hypothesis is not rejected.[6]
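The coin-toss probabilities above are exact binomial calculations and can be verified directly:

```python
# Probability of observing 5 heads in 5 tosses of a fair coin.
p_one_tail = 0.5 ** 5       # one-tailed: only the all-heads outcome counts
p_two_tail = 2 * 0.5 ** 5   # two-tailed: all-heads or all-tails counts

print(p_one_tail)   # 0.03125 (about 3%, below the 5% level)
print(p_two_tail)   # 0.0625 (about 6%, above the 5% level)
```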
This example illustrates one hazard of hypothesis testing: evaluating a large number of true null hypotheses against a single dataset is likely to reject some of them spuriously because of the inevitable noise in the data. However, formulating a null hypothesis before collecting the data limits the chance of spuriously rejecting it, when it is true, to the chosen significance level.
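This hazard can be illustrated with a short simulation. Under a true null hypothesis the p-value is uniformly distributed on [0, 1], so each of many tests has an alpha-sized chance of a spurious rejection; the seed and number of tests below are arbitrary choices for the sketch.

```python
import random

random.seed(1)
alpha = 0.05
n_tests = 1000

# Under a true null hypothesis the p-value is uniform on [0, 1],
# so each test has probability alpha of rejecting spuriously.
p_values = [random.random() for _ in range(n_tests)]
false_rejections = sum(p < alpha for p in p_values)
print(false_rejections)  # close to alpha * n_tests = 50 on average
```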
In scientific and medical research, null hypotheses play a major role in testing the significance of differences in treatment and control groups. This use, while widespread, offers several grounds for criticism, including straw man, Bayesian criticism and publication bias.
The typical null hypothesis at the outset of the experiment is that no difference exists between the control and experimental groups (for the variable being compared). Other null hypotheses are possible, depending on the question the experiment is designed to answer.
Given the test scores of two random samples of men and women, does one group differ from the other? A possible null hypothesis is that the mean male score is the same as the mean female score:

H0: μ1 = μ2

where:

H0 = the null hypothesis,
μ1 = the mean score of the male population, and
μ2 = the mean score of the female population.
A stronger null hypothesis is that the two samples are drawn from the same population, such that the variance and shape of the distributions are also equal.
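A minimal sketch of testing the equal-means null hypothesis, using a Welch-type two-sample statistic; the scores and sample sizes below are purely illustrative.

```python
import math
import statistics

# Hypothetical test scores for the two random samples.
men   = [72, 81, 64, 90, 75, 68, 84, 77]
women = [79, 85, 70, 88, 74, 91, 66, 80]

mean_m, mean_w = statistics.mean(men), statistics.mean(women)
var_m, var_w = statistics.variance(men), statistics.variance(women)

# Welch two-sample statistic for H0: mu1 = mu2; values near zero
# are consistent with the null hypothesis.
t = (mean_m - mean_w) / math.sqrt(var_m / len(men) + var_w / len(women))
print(round(t, 3))
```

For these made-up samples the statistic is small in magnitude, so the null hypothesis of equal means would not be rejected.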
Much of the terminology used in connection with null hypotheses derives from their immediate relation to statistical hypothesis testing; part of this terminology is outlined below.
A point hypothesis is more complicated to describe. The term arises in contexts where the set of all possible population distributions is put in parametric form. A point hypothesis is one where exact values are specified for either all the parameters or for a subset of the parameters. Formally, the case where only a subset of parameters is defined is still a composite hypothesis; nonetheless, the term point hypothesis is often applied in such cases, particularly where the hypothesis test can be structured in such a way that the distribution of the test statistic (the distribution under the null hypothesis) does not depend on the parameters whose values have not been specified under the point null hypothesis. Careful treatments of point hypotheses for subsets of parameters do consider them as composite hypotheses and study how the p-value for a fixed critical value of the test statistic varies with the parameters that are not specified by the null hypothesis.
A one-tailed hypothesis is a hypothesis in which the value of a parameter is specified as being either above or equal to a certain value, or below or equal to a certain value.
An example of a one-tailed null hypothesis, in a medical context, would be that an existing treatment, A, is no worse than a new treatment, B. The corresponding alternative hypothesis would be that B is better than A. Here, if the null hypothesis were accepted (i.e. there is no reason to reject the hypothesis that A is at least as good as B), the conclusion would be that treatment A should continue to be used. If the null hypothesis were rejected, the result would be that treatment B would be used in future, given the evidence that it is better than A. A hypothesis test would look for evidence that B is better than A, not for evidence that the outcomes of treatments A and B are merely different. Formulating the hypothesis as a "better than" comparison is said to give the hypothesis directionality.
Quite often, statements of point null hypotheses appear to have no "directionality": values larger or smaller than the hypothesized value seem conceptually identical. However, null hypotheses can and do have direction, and in many such cases statistical theory allows the formulation of the test procedure to be simplified, so that the test is equivalent to testing for an exact identity. For instance, for the one-tailed alternative hypothesis "application of Drug A will lead to increased growth in patients", the true null hypothesis is the logical opposite of the alternative: "application of Drug A will not lead to increased growth in patients" (a composite null hypothesis). The effective null hypothesis is "application of Drug A will have no effect on growth in patients" (a point null hypothesis).
In order to understand why the effective null hypothesis is valid, it is instructive to consider the above hypotheses. The alternative predicts that exposed patients experience increased growth compared to the control group. That is,

H1: μ_drug > μ_control

The true null hypothesis is:

HT: μ_drug ≤ μ_control

The effective null hypothesis is:

H0: μ_drug = μ_control
The reduction occurs because, in order to gauge support for the alternative, classical hypothesis testing requires calculating how often the results would be as extreme as, or more extreme than, the observations. This requires first measuring the probability of rejecting the null hypothesis for each possibility it includes, and second ensuring that these probabilities are all less than or equal to the test's quoted significance level. For reasonable test procedures, the largest such probability occurs on the boundary of the region covered by HT, specifically for the cases included in H0. Thus the test procedure (that is, its critical values) can be defined for testing the null hypothesis HT exactly as if the null hypothesis of interest were the reduced version H0.
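A simulation can make the boundary argument concrete. Assuming a one-sided z-test of HT: μ ≤ 0 with known variance (the sample size, significance level, and effect values below are illustrative choices), the rejection probability is largest at the boundary point μ = 0, where it equals the test's significance level:

```python
import math
import random

random.seed(0)
Z_CRIT = 1.645          # one-sided 5% critical value of the standard normal
N, REPS, SIGMA = 30, 4000, 1.0

def rejection_rate(mu):
    """Fraction of simulated samples rejecting HT: mu <= 0,
    using the z-test that rejects when z > Z_CRIT."""
    rejections = 0
    for _ in range(REPS):
        xbar = sum(random.gauss(mu, SIGMA) for _ in range(N)) / N
        z = xbar * math.sqrt(N) / SIGMA
        rejections += z > Z_CRIT
    return rejections / REPS

rate_boundary = rejection_rate(0.0)   # boundary case: the point null H0
rate_interior = rejection_rate(-0.5)  # a case strictly inside HT
print(rate_boundary, rate_interior)
```

The boundary rate comes out near the 5% significance level, while the interior rate is far smaller, which is why calibrating the test at the point H0 controls the error rate over all of HT.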
Fisher said, "the null hypothesis must be exact, that is free of vagueness and ambiguity, because it must supply the basis of the 'problem of distribution,' of which the test of significance is the solution", implying a more restrictive domain for H0.[7] According to this view, the null hypothesis must be numerically exact—it must state that a particular quantity or difference is equal to a particular number. In classical science, it is most typically the statement that there is no effect of a particular treatment; in observations, it is typically that there is no difference between the value of a particular measured variable and that of a prediction. The majority of null hypotheses in practice do not meet this "exactness" criterion. For example, consider the usual test that two means are equal where the true values of the variances are unknown—exact values of the variances are not specified.
Most statisticians believe that it is valid to state direction as part of the null hypothesis, or as part of a null hypothesis/alternative hypothesis pair.[8] The logic is quite simple: if the direction is omitted and the null hypothesis is not rejected, the conclusion is confusing to interpret. For example, consider an H0 claiming that the population mean equals 10, with the one-tailed alternative that the mean is greater than 10. If the sample mean is −200 and the corresponding t-test statistic is −50, what is the conclusion? Not enough evidence to reject the null hypothesis? Surely not! But we cannot accept the one-sided alternative in this case either. Therefore, to avoid this ambiguity, it is better to include the direction of the effect when the test is one-sided. The statistical theory required to deal with the simple cases described here, and with more complicated ones, makes use of the concept of an unbiased test.
Statistical hypothesis testing involves performing the same experiment on multiple subjects. The number of subjects is known as the sample size. The properties of the procedure depend on the sample size. Even if a null hypothesis does not hold for the population, an insufficient sample size may prevent its rejection. If the sample size is under a researcher's control, a good choice depends on the statistical power of the test, the effect size that the test must reveal, and the desired significance level. The significance level is the probability of rejecting the null hypothesis when the null hypothesis holds in the population. The statistical power is the probability of rejecting the null hypothesis when it does not hold in the population (i.e., for a particular effect size).
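For a one-sided z-test at the 5% level, power has a closed form, which can be used to sketch how power grows with sample size; the effect sizes and sample sizes below are illustrative assumptions.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(effect_size, n, z_alpha=1.645):
    """Power of a one-sided z-test at the 5% level (z_alpha = 1.645):
    the probability of rejecting H0 when the true standardized
    effect (mu - mu0) / sigma equals effect_size."""
    return normal_cdf(effect_size * math.sqrt(n) - z_alpha)

print(round(power(0.5, 10), 3))  # modest power with a small sample
print(round(power(0.5, 50), 3))  # high power with a larger sample
print(round(power(0.0, 50), 3))  # no effect: rejection rate is the 5% level
```

Note the last line: when the null hypothesis actually holds, the "power" reduces to the significance level itself, matching the definitions above.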